
Jaw-dropping 3D scan shows a section of a MOUSE BRAIN the size of a grain of sand as no one has EVER seen it before

Daily Mail - Science & tech

A ground-breaking study shows the most detailed map of a mammal's brain to date. The 3D blueprints display more than two miles of neural wiring, close to 100,000 nerve cells, and about 500 million synapses -- all contained in a piece of mouse brain no bigger than a grain of sand. Dr Clay Reid of the Allen Institute for Brain Science in Seattle said: 'Inside this tiny speck is an exquisite forest of connections, filled with rules we're only beginning to understand.' The sample comes from an outer part of the brain known as the cortex, a region involved in sight, the Times reports. Dr Forrest Collman, of the same institute, said: 'By studying how the cortex functions in the mouse brain, we can generate better ideas and hypotheses about how our own brains work.'


Largest mammalian brain map ever could unpick what makes us human

New Scientist

The largest and most comprehensive 3D map of a mammalian brain to date offers an unprecedented insight into how neurons connect and function. The new map, which captures a cubic millimetre of a mouse's visual cortex, will allow scientists to study brain function in extraordinary detail, potentially revealing crucial insights into how neural activity shapes behaviour, how complex traits like consciousness arise, and even what it means to be human. "Our behaviours ultimately arise from activity in the brain, and brain tissue shares very similar properties in all mammals," says team member Forrest Collman at the Allen Institute for Brain Science in Seattle. "This is one reason we believe insights about the mouse cortex can generalise to humans." The achievement – something that biologist Francis Crick said in 1979 was "impossible" – took seven years to complete and involved 150 researchers from three institutions.


Reviews: Comparison Against Task Driven Artificial Neural Networks Reveals Functional Organization in Mouse Visual Cortex

Neural Information Processing Systems

Strengths: I found the authors' formulation of network pseudo-depth to be a very interesting and potentially useful metric for comparing artificial neural network models to neural data. Their finding (displayed in Figure 2) that roughly 1,000-2,000 sampled neurons are needed for the VGG-16 pseudo-depth to be estimated consistently, and that this holds when comparing representations against another network (VGG-19), suggests a useful rule of thumb for adequate population sizes in neural data. Furthermore, their finding that mouse visual cortex becomes more parallel after a few stages of hierarchical processing, starting around area VISp, could help build better task-driven models of mouse visual cortex and marks an important distinction from the traditional, hierarchical primate ventral visual pathway.

Weaknesses: I would have liked to see more analyses of the robustness of the pseudo-depth metric with different networks, especially those outside the VGG family. I am aware that the Allen Institute has compared VGG-16/19 to their mouse data, which is likely why the authors chose this model to begin with.
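The review does not spell out how pseudo-depth is computed, but one common way to assign a network depth to a recorded neural population is to find the layer whose representation best matches the responses. Below is a minimal sketch of that idea, assuming linear CKA as the similarity metric and synthetic data in place of real recordings (the paper's actual metric and pipeline may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def best_matching_layer(neural_resp, layer_feats):
    """Return the index and normalized depth of the layer whose
    representation best matches the recorded neural responses.

    neural_resp: (stimuli, neurons) array of recorded responses.
    layer_feats: list of (stimuli, units) arrays, one per layer.
    Similarity here is linear CKA, one standard choice for
    comparing representations.
    """
    def cka(X, Y):
        # Center both representations, then compute linear CKA.
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        num = np.linalg.norm(Y.T @ X, "fro") ** 2
        den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
        return num / den

    scores = [cka(neural_resp, feats) for feats in layer_feats]
    best = int(np.argmax(scores))
    pseudo_depth = best / (len(layer_feats) - 1)  # 0 = earliest, 1 = deepest
    return best, pseudo_depth

# Toy demo: the "recordings" are a noisy linear readout of layer 2
# in a five-layer stack, so the estimator should recover that layer.
stimuli, units = 200, 64
layers = [rng.normal(size=(stimuli, units)) for _ in range(5)]
mixing = rng.normal(size=(units, 50))
recordings = layers[2] @ mixing + 0.1 * rng.normal(size=(stimuli, 50))
best, depth = best_matching_layer(recordings, layers)
```

The review's point about population size translates here to how many recorded units are needed before the argmax over layers stabilizes; with too few neurons, the similarity scores become noisy and the estimated pseudo-depth fluctuates.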


New Alzheimer's research reveals 'quiet' phase of the disease, before symptoms appear

FOX News

Fox News contributor Dr. Marc Siegel joins 'Fox News Live' to discuss the FDA approving a new Alzheimer's treatment drug and the FDA banning brominated vegetable oil. New details have emerged about how Alzheimer's disease affects the brain. Researchers led by the Allen Institute for Brain Science in Seattle and University of Washington Medicine have identified cellular changes in the brains of people with the disease -- and a timeline of when they occur. "Instead of looking at AD just through the usual lens of plaques and tangles, we focused on how specific cell types were changed in each phase," study author Dr. Kyle Travaglini, a scientist at the Allen Institute, told Fox News Digital via email. "We identified two main phases in AD by arranging donors along a continuous disease trajectory -- a slow, early phase with low levels of pathology and no cognitive decline, followed by a later phase where there's a huge buildup of pathology and cognitive decline."


Another Big Question About AI: Its Carbon Footprint

Mother Jones

This story was originally published by Yale E360 and is reproduced here as part of the Climate Desk collaboration. Two months after its release in November 2022, OpenAI's ChatGPT had 100 million active users, and suddenly tech corporations were racing to offer the public more "generative AI." Pundits compared the new technology's impact to the Internet, or electrification, or the Industrial Revolution -- or the discovery of fire. Time will sort hype from reality, but one consequence of the explosion of artificial intelligence is clear: this technology's environmental footprint is large and growing. AI use is directly responsible for carbon emissions from non-renewable electricity and for the consumption of millions of gallons of fresh water, and it indirectly boosts impacts from building and maintaining the power-hungry equipment on which AI runs. As tech companies seek to embed high-intensity AI into everything from resume writing to kidney-transplant medicine, and from choosing dog food to climate modeling, they cite many ways AI could help reduce humanity's environmental footprint.


How Moral Can A.I. Really Be?

The New Yorker

A few years ago, the Allen Institute for A.I. built a chatbot named Delphi, which is designed to tell right from wrong. It does a surprisingly decent job. Type in, "Cheating on an exam," and Delphi says, "It's wrong." But write, "Cheating on an exam to save someone's life," and Delphi responds, "It's okay." The chatbot knows it's rude to use your lawn mower when your neighbors are sleeping, but not when they're out of town.


The Carbon Footprint of Artificial Intelligence

Communications of the ACM

The growing utilization of artificial intelligence (AI) is apparent across all facets of society, from the models that enable semi-autonomous cars, to the models that serve up recommendations on streaming or e-commerce sites, to the language models used to create more natural, intuitive human-machine interaction. However, these technological achievements come with costs, namely the massive amounts of electrical power required to train AI algorithms, to build the hardware on which those algorithms run, and to operate and maintain that hardware throughout its life cycle. The cost of the electricity is not the only impact: traditional power plants that burn fossil fuels (as well as some geothermal processes) emit relatively high amounts of carbon dioxide (CO2) as they generate electricity, compared with low-carbon energy sources such as solar, wind, or nuclear plants, which do not. That emitted CO2 has a direct impact on the environment. While all software has a carbon footprint -- the amount of CO2 directly related to its use -- large and complex AI models have a significant environmental cost and are increasingly coming under scrutiny.


Inside the secret list of websites that make AI chatbots sound smart

Washington Post - Technology News

AI chatbots have exploded in popularity over the past four months, stunning the public with their awesome abilities, from writing sophisticated term papers to holding unnervingly lucid conversations. Chatbots cannot think like humans: They do not actually understand what they say. They can mimic human speech because the artificial intelligence that powers them has ingested a gargantuan amount of text, mostly scraped from the internet. This text is the AI's main source of information about the world as the system is being built, and it influences how the chatbot responds to users. If it aces the bar exam, for example, it's probably because its training data included thousands of LSAT practice sites.


Sam Altman is tech's next household name -- if we survive the killer robots

#artificialintelligence

Sam Altman may be tech's next household name, but many Americans probably haven't heard of him. To anyone outside San Francisco, Altman would probably seem like just another young tech CEO. He's a Stanford University dropout who sold a tech startup years ago for a fortune, and he's spent the past decade investing and coaching other entrepreneurs. He posts confident and sunny life advice on Twitter and peppers his conversation with references to line graphs. But in the past three months, Altman, 37, has rocketed to the top of the tech industry's power rankings on the back of OpenAI.


AI's next frontier: AlphaCode can match programming prowess of average coders

#artificialintelligence

Artificial intelligence software programs are becoming shockingly adept at carrying on conversations, winning board games and generating artwork -- but what about creating software programs? In a newly published paper, researchers at Google DeepMind say their AlphaCode program can keep up with the average human coder in standardized programming contests. "This result marks the first time an artificial intelligence system has performed competitively in programming contests," the researchers report in this week's issue of the journal Science. There's no need to sound the alarm about Skynet just yet: DeepMind's code-generating system earned an average ranking in the top 54.3% in simulated evaluations on recent programming competitions on the Codeforces platform -- which is a very "average" average. "Competitive programming is an extremely difficult challenge, and there's a massive gap between where we are now (solving around 30% of problems in 10 submissions) and top programmers (solving 90% of problems in a single submission)," DeepMind research scientist Yujia Li, one of the Science paper's principal authors, told GeekWire in an email.